Bayesian GAN
Generative adversarial networks (GANs) can implicitly learn rich distributions over images, audio, and data which are hard to model with an explicit likelihood. We present a practical Bayesian formulation for unsupervised and semi-supervised learning with GANs. Within this framework, we use stochastic gradient Hamiltonian Monte Carlo to marginalize the weights of the generator and discriminator networks. The resulting approach is straightforward and obtains good performance without any standard interventions such as feature matching or mini-batch discrimination. By exploring an expressive posterior over the parameters of the generator, the Bayesian GAN avoids mode-collapse, produces interpretable and diverse candidate samples, and provides state-of-the-art quantitative results for semi-supervised learning on benchmarks including SVHN, CelebA, and CIFAR-10, outperforming DCGAN, Wasserstein GANs, and DCGAN ensembles.
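The abstract's key mechanism, marginalizing network weights with stochastic gradient Hamiltonian Monte Carlo (SGHMC), can be illustrated with a minimal NumPy sketch on a toy target. The step size, friction, and noise scale below are illustrative choices, not the paper's settings, and a Gaussian stands in for the weight posterior so the stationary distribution is checkable.

```python
import numpy as np

def sghmc_step(theta, v, grad_neg_log_post, eps=1e-3, alpha=0.05, rng=None):
    """One SGHMC update: gradient step on the momentum, with friction
    (1 - alpha) and injected noise N(0, 2*alpha*eps) to compensate for
    the stochasticity of the gradient estimate."""
    rng = np.random.default_rng() if rng is None else rng
    g = grad_neg_log_post(theta)
    noise = rng.normal(0.0, np.sqrt(2 * alpha * eps), size=theta.shape)
    v = (1 - alpha) * v - eps * g + noise
    return theta + v, v

# Toy "posterior" over a 2-D weight vector: standard Gaussian, so the
# gradient of -log p(theta) is just theta. The chain should produce
# samples with roughly zero mean and unit standard deviation per dimension.
rng = np.random.default_rng(0)
theta, v = np.zeros(2), np.zeros(2)
samples = []
for t in range(20000):
    theta, v = sghmc_step(theta, v, lambda th: th, rng=rng)
    if t >= 2000:  # discard burn-in
        samples.append(theta.copy())
samples = np.array(samples)
print("mean:", samples.mean(axis=0), "std:", samples.std(axis=0))
```

In the Bayesian GAN, `theta` would be the generator (or discriminator) weights and `grad_neg_log_post` the minibatch gradient of the corresponding conditional posterior; the update itself is unchanged.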
To Reviewer # 1
We thank all reviewers for their valuable comments. In what follows, we respond to their concerns one by one. Reviewer #1 writes: "The author should evaluate different attack methods and show the experimental results." Here we refer to the attack used in our paper as the threshold attack. In fact, we have also shown the results of other attacks (i.e.
Reviews: Hierarchical Implicit Models and Likelihood-Free Variational Inference
The paper defines a class of probability models -- hierarchical implicit models -- consisting of observations with associated 'local' latent variables that are conditionally independent given a set of 'global' latent variables, and in which the observation likelihood is not assumed to be tractable. It describes an approach for KL-based variational inference in such 'likelihood-free' models, using a GAN-style discriminator to estimate the log ratio between a 'variational joint' q(x, z), constructed using the empirical distribution on observations, and the true model joint density. This approach has the side benefit of supporting implicit variational models ('variational programs'). Proof-of-concept applications are demonstrated to ecological simulation, a Bayesian GAN, and sequence modeling with a stochastic RNN. The exposition is very clear, well cited, and the technical machinery is carefully explained.
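The core trick the review describes, estimating a log density ratio with a GAN-style discriminator, reduces in its simplest form to logistic regression: the optimal classifier's logit between samples of q and p equals log q(x)/p(x). A small NumPy sketch on toy 1-D Gaussians (learning rate and iteration count are illustrative) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x_q = rng.normal(1.0, 1.0, n)   # samples from the "variational" distribution q
x_p = rng.normal(0.0, 1.0, n)   # samples from the model distribution p
x = np.concatenate([x_q, x_p])
y = np.concatenate([np.ones(n), np.zeros(n)])  # label 1 = drawn from q

# Logistic regression on features (1, x); at the optimum, the fitted
# logit w0 + w1*x estimates log q(x)/p(x).
feats = np.stack([np.ones_like(x), x], axis=1)
w = np.zeros(2)
for _ in range(2000):
    logits = feats @ w
    prob = 1.0 / (1.0 + np.exp(-logits))
    w -= 0.1 * feats.T @ (prob - y) / len(y)  # gradient descent step

est = lambda t: w[0] + w[1] * t
exact = lambda t: t - 0.5   # closed-form log N(t; 1, 1) / N(t; 0, 1)
print("estimated vs exact at 0:", est(0.0), exact(0.0))
print("estimated vs exact at 2:", est(2.0), exact(2.0))
```

With the two Gaussians above, the exact log ratio is linear in x, so this tiny classifier can recover it; in the paper's setting the discriminator is a neural network playing the same role for intractable joints.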
Reviews: Bayesian GAN
Summary: The paper introduces a Bayesian variant of GAN algorithms, in which the generator G and discriminator D do not have a single fixed set of weights that gets gradually optimised. Instead, the weights for G and for D are sampled from two distributions (one for each), and it is those distributions that get iteratively updated. Different weight realisations of G may thus generate images with different styles, corresponding to different modes in the dataset. This, together with the regularisation effect of the priors on the weights, promotes diversity and alleviates the mode collapse issue. The many experiments conducted in the paper support these claims.
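The mode-coverage claim can be visualised with a toy stand-in: run several noisy-gradient sampling chains (SGLD-style, a simplification of the paper's SGHMC) over a scalar "generator weight" whose posterior is bimodal, and the resulting weight samples spread over both modes instead of collapsing onto one. All quantities below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Bimodal toy posterior over a scalar generator weight w:
# -log p(w) = (w^2 - 4)^2 / 8, with modes at w = -2 and w = +2.
grad_U = lambda w: (w**2 - 4) * w / 2

eps = 0.01                       # step size (illustrative)
chains = np.linspace(-3, 3, 10)  # ten weight samples, spread initialisations
for _ in range(5000):
    noise = rng.normal(0.0, np.sqrt(2 * eps), size=chains.shape)
    chains = chains - eps * grad_U(chains) + noise

print("final weight samples:", np.round(chains, 2))
```

Each chain settles near one of the two modes, so the collection of weight samples covers both; a single point estimate would have to commit to one, which is the analogue of the mode collapse the review refers to.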
Reviews: Bayesian Adversarial Learning
This paper proposes a Bayesian model for the adversarial learning problem. Empirical studies on Fashion-MNIST and traffic sign recognition show that the proposed method is slightly better than other adversarial learning baselines. Below I list my concerns about the paper: For modeling, 1. This paper ignores a highly relevant work, 'Bayesian GAN' [1]. The non-cooperative game between 'data generator' and 'learner' established in this paper is almost the same as in the vanilla GAN.
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.86)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.53)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.53)
Bayesian GAN
Yunus Saatci, Andrew G. Wilson
Generative Adversarial Networks (GANs) & Bayesian Networks - DataScienceCentral.com
Generative Adversarial Networks (GANs) software is software for producing forgeries and imitations of data (aka synthetic data, fake data). Human beings have been making fakes, with good or evil intent, of almost everything they possibly can, since the beginning of the human race. Thus, perhaps not too surprisingly, GAN software has been widely used since it was first proposed in this amazingly recent 2014 paper. To gauge how widely GAN software has been used so far, see, for example, this 2019 article entitled "18 Impressive Applications of Generative Adversarial Networks (GANs)": sounds (voices, music, …), images (realistic pictures, paintings, drawings, handwriting, …), text, etc. The forgeries can be tweaked so that they range from being very similar to the originals, to being whimsical exaggerations thereof.
- Information Technology > Artificial Intelligence > Machine Learning > Unsupervised or Indirectly Supervised Learning (0.88)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.88)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.48)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.48)